Autoencoders, Unsupervised Learning, and Deep Architectures
Author
Abstract
Autoencoders play a fundamental role in unsupervised learning and in deep architectures for transfer learning and other tasks. In spite of their fundamental role, only linear autoencoders over the real numbers have been solved analytically. Here we present a general mathematical framework for the study of both linear and non-linear autoencoders. The framework allows one to derive an analytical treatment for the most non-linear autoencoder, the Boolean autoencoder. Learning in the Boolean autoencoder is equivalent to a clustering problem that can be solved in polynomial time when the number of clusters is small and becomes NP-complete when the number of clusters is large. The framework sheds light on the different kinds of autoencoders, their learning complexity, their horizontal and vertical composability in deep architectures, their critical points, and their fundamental connections to clustering, Hebbian learning, and information theory.
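The clustering equivalence stated in the abstract can be made concrete with a small sketch. The following is an illustration only, under our own assumptions (binary inputs, a hidden layer of n_hidden Boolean units, and the made-up helper name boolean_autoencoder_fit): encoding maps each input to whichever of the at most 2**n_hidden centroids is nearest in Hamming distance, decoding returns the component-wise majority vector of each cluster, and alternating the two steps gives a k-means-style heuristic for the underlying clustering problem. The polynomial-time versus NP-complete distinction in the abstract concerns the exact optimization, not this heuristic.

```python
import numpy as np

def boolean_autoencoder_fit(X, n_hidden, n_iters=50, seed=0):
    """Alternating-optimization sketch of a Boolean autoencoder.

    X        : (n_samples, n_bits) array with entries in {0, 1}.
    n_hidden : number of Boolean hidden units, i.e. at most 2**n_hidden codes.
    Returns the binary centroids (one per hidden code) and the code index
    assigned to each sample.
    """
    rng = np.random.default_rng(seed)
    k = 2 ** n_hidden                                  # number of hidden codes
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()

    for _ in range(n_iters):
        # Encoding: assign each input to the nearest centroid in Hamming distance.
        dists = (X[:, None, :] != centroids[None, :, :]).sum(axis=2)
        codes = dists.argmin(axis=1)
        # Decoding: the best binary centroid for a cluster is the
        # component-wise majority vote of its members.
        for c in range(k):
            members = X[codes == c]
            if len(members):
                centroids[c] = (members.mean(axis=0) >= 0.5).astype(X.dtype)
    return centroids, codes

# Example: 200 random 8-bit vectors compressed through 2 Boolean hidden units,
# i.e. clustered onto at most 4 binary centroids.
X = np.random.default_rng(1).integers(0, 2, size=(200, 8))
centroids, codes = boolean_autoencoder_fit(X, n_hidden=2)
```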
Similar resources
Churn analysis using deep convolutional neural networks and autoencoders
Customer temporal behavioral data was represented as images in order to perform churn prediction by leveraging deep learning architectures prominent in image classification. Supervised learning was performed on labeled data of over 6 million customers using deep convolutional neural networks, which achieved an AUC of 0.743 on the test dataset using no more than 12 temporal features for each cus...
Evaluating deep variational autoencoders trained on pan-cancer gene expression
Cancer is a heterogeneous disease with diverse molecular etiologies and outcomes. The Cancer Genome Atlas (TCGA) has released a large compendium of over 10,000 tumors with RNA-seq gene expression measurements. Gene expression captures the diverse molecular profiles of tumors and can be interrogated to reveal differential pathway activations. Deep unsupervised models, including Variational Autoe...
Winner-Take-All Autoencoders
In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion. We first introduce fully-connected winner-take-all autoencoders which use mini-batch statistics to directly enforce a lifetime sparsity in the activations of the hidden units. We then propose the convolutional winner-take-all autoencoder which combines the benefits of ...
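The lifetime-sparsity constraint this entry refers to can be sketched as a simple batch operation on hidden activations. This is our own reading of the idea, not the authors' code; the helper name lifetime_sparsity and the 5% rate are illustrative assumptions: each hidden unit keeps only its largest activations across the current mini-batch and is zeroed everywhere else before the decoder reconstructs.

```python
import numpy as np

def lifetime_sparsity(H, rate=0.05):
    """Keep, for each hidden unit (column of H), only its top `rate` fraction
    of activations across the mini-batch; zero out the rest.

    H : (batch_size, n_hidden) matrix of hidden activations.
    """
    batch_size = H.shape[0]
    k = max(1, int(round(rate * batch_size)))
    # Per-unit threshold = k-th largest activation of that unit in the batch.
    thresholds = np.partition(H, batch_size - k, axis=0)[batch_size - k]
    return np.where(H >= thresholds, H, 0.0)

# Example: a batch of 64 examples with 10 hidden units; roughly 3 activations
# per unit survive, and the decoder would reconstruct from this sparse code.
H = np.random.default_rng(0).random((64, 10))
H_sparse = lifetime_sparsity(H, rate=0.05)
```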
Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion
We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thu...
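A rough sketch of the strategy described here, under our own simplifying assumptions (masking noise, tied weights, sigmoid units, plain batch gradient descent, and the illustrative helper names train_denoising_layer and stack_layers): each layer is trained in isolation to reconstruct the clean input from a corrupted copy, and the clean hidden codes are then fed forward to train the next layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_denoising_layer(X, n_hidden, corruption=0.3, lr=0.1, epochs=20, seed=0):
    """Train one denoising layer with tied weights and masking noise.

    Each input has a random fraction of components zeroed; the layer is
    trained by batch gradient descent on squared error to reconstruct the
    uncorrupted input. Returns (W, b_h) so the clean code sigmoid(X @ W + b_h)
    can feed the next layer.
    """
    rng = np.random.default_rng(seed)
    n_visible = X.shape[1]
    W = rng.normal(0, 0.1, size=(n_visible, n_hidden))
    b_h = np.zeros(n_hidden)
    b_v = np.zeros(n_visible)

    for _ in range(epochs):
        mask = rng.random(X.shape) > corruption      # masking noise
        X_tilde = X * mask
        H = sigmoid(X_tilde @ W + b_h)               # encode the corrupted input
        X_hat = sigmoid(H @ W.T + b_v)               # decode with tied weights
        # Squared-error gradients, backpropagated through both sigmoid layers.
        d_out = (X_hat - X) * X_hat * (1 - X_hat)
        d_hid = (d_out @ W) * H * (1 - H)
        W -= lr * (X_tilde.T @ d_hid + d_out.T @ H) / len(X)
        b_h -= lr * d_hid.mean(axis=0)
        b_v -= lr * d_out.mean(axis=0)
    return W, b_h

def stack_layers(X, layer_sizes):
    """Greedy layer-wise stacking: each new layer denoises the previous
    layer's clean representation."""
    reps, params = X, []
    for n_hidden in layer_sizes:
        W, b_h = train_denoising_layer(reps, n_hidden)
        params.append((W, b_h))
        reps = sigmoid(reps @ W + b_h)               # clean (uncorrupted) code
    return params, reps

# Example: greedily stack two layers (16 and 8 hidden units) on toy data.
X = np.random.default_rng(1).random((256, 32))
params, top_code = stack_layers(X, layer_sizes=[16, 8])
```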
A Winner-Take-All Method for Training Sparse Convolutional Autoencoders
We explore combining the benefits of convolutional architectures and autoencoders for learning deep representations in an unsupervised manner. A major challenge is to achieve appropriate sparsity among hidden variables, since neighbouring variables in each feature map tend to be highly correlated and a suppression mechanism is therefore needed. Previously, deconvolutional networks and convoluti...
Publication year: 2012